51 research outputs found

    Open Band: Audiotype

    Get PDF
    Open Band is a collective performance that addresses a contradiction of social media, namely the isolation of individuals on their own devices, by proposing a collective sound intervention in which a conductor interacts with the audience through an anonymous chat interface that converts text into sound messages. In this version, we work exclusively with Web Audio synthesis, based on an idea of audio typography.
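
    As a rough illustration of the audio-typography idea, the sketch below maps each typed character to a pitch and renders it as a short tone. It is a minimal offline sketch in Python with numpy; the actual system performs this mapping in the browser with the Web Audio API, and the specific character-to-pitch mapping shown here is an assumption.

```python
# Minimal sketch of an "audio typography" mapping: each typed character
# becomes a short sine tone whose pitch depends on the character.
# The real Open Band system does this in the browser via Web Audio;
# the mapping below (char code -> semitone offset) is an illustrative assumption.
import numpy as np

SAMPLE_RATE = 44100
NOTE_SECONDS = 0.15

def char_to_freq(ch: str, base_hz: float = 220.0) -> float:
    """Map a character to a frequency on an equal-tempered scale."""
    semitones = ord(ch.lower()) % 24  # fold characters onto two octaves
    return base_hz * 2 ** (semitones / 12)

def render_message(text: str) -> np.ndarray:
    """Render a chat message as a sequence of short sine tones."""
    t = np.linspace(0, NOTE_SECONDS, int(SAMPLE_RATE * NOTE_SECONDS), endpoint=False)
    envelope = np.hanning(t.size)  # avoid clicks at note boundaries
    tones = []
    for ch in text:
        if ch.isspace():
            tones.append(np.zeros_like(t))  # spaces become rests
        else:
            tones.append(np.sin(2 * np.pi * char_to_freq(ch) * t) * envelope)
    return np.concatenate(tones) if tones else np.zeros(0)

audio = render_message("open band")
# `audio` can be written to a WAV file or streamed to a sound device.
```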

    Consistency of timbre patterns in expressive music performance

    No full text
    Musical interpretation is an intricate process due to the interaction of the musician's gesture and the physical possibilities of the instrument. From a perceptual point of view, these elements induce variations in rhythm, acoustical energy and timbre. This study aims at showing that timbre variations are an important attribute of musical interpretation. For this purpose, a general protocol is proposed for emphasizing specific timbre patterns from the analysis of recorded musical sequences. An example of the results obtained by analyzing clarinet sequences is presented, showing stable timbre variations and their correlations with both rhythm and energy deviations. In this study, we aim at checking whether timbre also follows systematic variations in natural clarinet sounds. We first describe a general methodology developed to analyze and compare recorded musical performances in order to point out the consistency of timbre, rhythmic and intensity patterns in expressive music performance. An application of this methodology to twenty recorded musical sequences by the same clarinettist is then given. Eventually, we show that timbre, like rhythm and intensity, follows systematic patterns across performances.
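
    One plausible reading of the descriptor-extraction step is sketched below with librosa (an assumed toolchain; the abstract describes the methodology only in general terms): extract a per-frame timbre descriptor (spectral centroid) and energy (RMS) from each recorded take, then correlate the resulting profiles within and across performances.

```python
# Sketch of the descriptor-extraction step: per-frame spectral centroid (a
# common timbre correlate) and RMS energy from a recorded take, plus their
# correlation. Library choice (librosa), frame settings and file names are
# assumptions; the paper aligns patterns note-wise, which this sketch skips.
import numpy as np
import librosa

def timbre_energy_profiles(path: str, hop_length: int = 512):
    y, sr = librosa.load(path, sr=None, mono=True)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop_length)[0]
    rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]
    return centroid, rms

def profile_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two descriptor profiles (truncated to equal length)."""
    n = min(a.size, b.size)
    return float(np.corrcoef(a[:n], b[:n])[0, 1])

# Example: timbre/energy correlation within one take, and consistency of the
# timbre profile across two takes of the same piece.
c1, e1 = timbre_energy_profiles("take_01.wav")
c2, _ = timbre_energy_profiles("take_02.wav")
print("timbre vs. energy (take 1):", profile_correlation(c1, e1))
print("timbre consistency (take 1 vs. 2):", profile_correlation(c1, c2))
```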

    Real-time Percussive Technique Recognition and Embedding Learning for the Acoustic Guitar

    Full text link
    Real-time music information retrieval (RT-MIR) has much potential to augment the capabilities of traditional acoustic instruments. We develop RT-MIR techniques aimed at augmenting percussive fingerstyle, which blends acoustic guitar playing with guitar body percussion. We formulate several design objectives for RT-MIR systems for augmented instrument performance: (i) causal constraint, (ii) perceptually negligible action-to-sound latency, (iii) control intimacy support, (iv) synthesis control support. We present and evaluate real-time guitar body percussion recognition and embedding learning techniques based on convolutional neural networks (CNNs) and CNNs jointly trained with variational autoencoders (VAEs). We introduce a taxonomy of guitar body percussion based on hand part and location. We follow a cross-dataset evaluation approach by collecting three datasets labelled according to the taxonomy. The embedding quality of the models is assessed using KL-Divergence across distributions corresponding to different taxonomic classes. Results indicate that the networks are strong classifiers especially in a simplified 2-class recognition task, and the VAEs yield improved class separation compared to CNNs as evidenced by increased KL-Divergence across distributions. We argue that the VAE embedding quality could support control intimacy and rich interaction when the latent space's parameters are used to control an external synthesis engine. Further design challenges around generalisation to different datasets have been identified.
    Comment: Accepted at the 24th Int. Society for Music Information Retrieval Conf., Milan, Italy, 2023
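
    A minimal sketch of one way to score class separation in a learned embedding space, in the spirit of the KL-Divergence evaluation described above: fit a diagonal Gaussian to each class's latent vectors and compute the closed-form KL divergence between them. The diagonal-Gaussian simplification, class names and toy data are assumptions; the paper does not state its exact estimator.

```python
# Fit a diagonal Gaussian to each class's embeddings and compute the
# closed-form KL divergence between the two Gaussians. Higher values
# suggest the classes occupy more clearly separated regions of the space.
import numpy as np

def diag_gaussian_kl(x0: np.ndarray, x1: np.ndarray, eps: float = 1e-8) -> float:
    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ) estimated from samples (n, d)."""
    mu0, var0 = x0.mean(axis=0), x0.var(axis=0) + eps
    mu1, var1 = x1.mean(axis=0), x1.var(axis=0) + eps
    return float(0.5 * np.sum(np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0))

# Toy usage with two hypothetical percussive classes in a 2-D latent space.
rng = np.random.default_rng(0)
thumb_hits = rng.normal(loc=[-1.0, 0.0], scale=0.3, size=(200, 2))
palm_hits = rng.normal(loc=[1.0, 0.5], scale=0.3, size=(200, 2))
print("KL(thumb || palm):", diag_gaussian_kl(thumb_hits, palm_hits))
```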

    Jamming with a Smart Mandolin and Freesound-based Accompaniment

    Get PDF
    This paper presents an Internet of Musical Things ecosystem involving musicians and audiences interacting with a smart mandolin, smartphones, and the Audio Commons online repository Freesound. The ecosystem has been devised to support performer-instrument and performer-audience interactions through the generation of musical accompaniments exploiting crowd-sourced sounds. We present two use cases investigating how audio content retrieved from Freesound can be leveraged by performers or audiences to produce accompanying soundtracks for music performance with a smart mandolin. In the performer-instrument interaction use case, the performer can select content to be retrieved prior to performing through a set of keywords and structure it in order to create the desired accompaniment. In the performer-audience interaction use case, a group of audience members participates in the music creation by selecting and arranging Freesound audio content to create an accompaniment collaboratively. We discuss the advantages and limitations of the system with regard to music making and audience participation, along with its implications and challenges.
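
    The keyword-based retrieval step could look roughly like the sketch below, which queries the public Freesound text-search endpoint; how results feed the accompaniment engine, the chosen fields and the key handling are assumptions rather than the paper's actual code.

```python
# Sketch of keyword-based retrieval from Freesound for accompaniment material.
# Endpoint and parameters follow the public Freesound API v2 text search;
# FREESOUND_TOKEN is a hypothetical placeholder for a valid API key.
import requests

FREESOUND_TOKEN = "YOUR_API_KEY"

def search_sounds(keywords: list[str], max_results: int = 10) -> list[dict]:
    """Return basic metadata for sounds matching the chosen keywords."""
    response = requests.get(
        "https://freesound.org/apiv2/search/text/",
        params={
            "query": " ".join(keywords),
            "fields": "id,name,duration",
            "page_size": max_results,
            "token": FREESOUND_TOKEN,
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["results"]

# Example: gathering material for a rain-themed accompaniment section.
for sound in search_sounds(["rain", "ambience"]):
    print(sound["id"], sound["name"], sound["duration"])
```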

    Open band: Audience Creative Participation Using Web Audio Synthesis

    Get PDF
    This work investigates a web-based open environment enabling collaborative music experiences. We propose an artifact, Open Band, which enables collective sound dialogues in a web “agora”, blurring the limits between audience and performers. The system relies on a multi-user chat in which textual inputs are translated to sounds. We depart from individual music playing experiences in favor of creative participation in networked music making. A previous implementation associated typed letters with pre-composed samples. In this paper we present and discuss a novel instance of our system that operates using Web Audio synthesis.
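
    The multi-user “agora” implies that each participant's typed input is relayed to every connected client, which then renders it as sound locally. A minimal relay sketch under that assumption is given below; the websockets package and the plain-text protocol are assumed choices, not the authors' implementation.

```python
# Minimal relay sketch for the shared web "agora": every typed message from
# any participant is broadcast to all connected clients, each of which maps
# the text to sound locally (in the real system, via Web Audio synthesis in
# the browser). The websockets package and plain-text protocol are assumptions.
import asyncio
import websockets

clients = set()

async def handle(websocket):
    clients.add(websocket)
    try:
        async for message in websocket:             # one chat message per frame
            websockets.broadcast(clients, message)  # everyone hears everyone
    finally:
        clients.discard(websocket)

async def main():
    async with websockets.serve(handle, "0.0.0.0", 8765):
        await asyncio.Future()  # serve until the process is stopped

if __name__ == "__main__":
    asyncio.run(main())
```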

    ProgGP: From GuitarPro Tablature Neural Generation To Progressive Metal Production

    Full text link
    Recent work in the field of symbolic music generation has shown value in using a tokenization based on the GuitarPro format, a symbolic representation supporting guitar expressive attributes, as an input and output representation. We extend this work by fine-tuning a pre-trained Transformer model on ProgGP, a custom dataset of 173 progressive metal songs, for the purpose of creating compositions in that genre through a human-AI partnership. Our model is able to generate multiple guitar, bass guitar, drums, piano and orchestral parts. We examine the validity of the generated music using a mixed methods approach, combining quantitative analyses following a computational musicology paradigm and qualitative analyses following a practice-based research paradigm. Finally, we demonstrate the value of the model by using it as a tool to create a progressive metal song, fully produced and mixed by a human metal producer based on AI-generated music.
    Comment: Pre-print accepted for publication at CMMR 2023
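
    A generic fine-tuning sketch for a causal language model on tokenized GuitarPro scores is shown below. GPT-2 is used here only as a stand-in, and the file paths, tokenizer choice and hyperparameters are assumptions; the paper fine-tunes its own pre-trained model on a GuitarPro-derived token format.

```python
# Generic causal-LM fine-tuning sketch with Hugging Face transformers.
# GPT-2 is a stand-in model; data file, tokenizer and hyperparameters are
# illustrative assumptions, not the paper's actual configuration.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One tokenized song per line, e.g. exported GuitarPro token sequences.
dataset = load_dataset("text", data_files={"train": "proggp_tokens.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="proggp-finetune",
                           num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```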